
    Predicting job execution time on a high-performance computing cluster using a hierarchical data-driven methodology

    Evaluating the performance of a vehicle before the production phase is both challenging and important. In the automotive industry, many virtual simulations are needed to model vehicle behavior as accurately as possible. However, these simulations take a long time to run, and users do not know their runtime in advance. Knowing the required time beforehand would allow users to manage simulations more effectively and choose the best strategy for the available computational resources. For this reason, we present an innovative, hierarchical data-driven method for estimating the execution time of jobs in advance. Our approach integrates unsupervised techniques, such as constrained k-means clustering, with tree-based classification and regression algorithms. Numerous experiments were conducted on a real dataset to verify the effectiveness of the proposed approach. The experimental results show that the proposed method is promising.
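    The two-stage idea in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses plain k-means (the paper uses a constrained variant), a per-cluster mean predictor as a stand-in for the tree-based regressors, and entirely made-up job features and runtimes.

    ```python
    # Stage 1: cluster jobs by their features; Stage 2: fit one simple
    # runtime model per cluster. All data and feature names are illustrative.
    import random

    def kmeans(points, k, iters=20, seed=0):
        """Plain k-means over feature tuples (squared Euclidean distance)."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k),
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                clusters[i].append(p)
            centers = [
                tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                for i, cl in enumerate(clusters)
            ]
        return centers

    def assign(p, centers):
        """Index of the nearest cluster center for feature tuple p."""
        return min(range(len(centers)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    # Toy jobs: (num_cores, input_size) -> runtime in seconds (hypothetical).
    jobs = [((2, 10), 100.0), ((2, 12), 110.0), ((16, 50), 900.0), ((16, 55), 950.0)]
    features = [f for f, _ in jobs]
    centers = kmeans(features, k=2)

    # Stage 2: one simple per-cluster model (here just the cluster's mean runtime,
    # standing in for the paper's tree-based regressors).
    by_cluster = {}
    for f, t in jobs:
        by_cluster.setdefault(assign(f, centers), []).append(t)
    models = {c: sum(ts) / len(ts) for c, ts in by_cluster.items()}

    def predict(feature):
        """Hierarchical prediction: route to a cluster, then apply its model."""
        return models[assign(feature, centers)]
    ```

    The split mirrors the hierarchy described in the abstract: clustering groups similar jobs, and a separate supervised model per group estimates execution time.
    
    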

    Cinematographic Shot Classification with Deep Ensemble Learning

    Cinematographic shot classification assigns a category to each shot based either on the field size or on the movement performed by the camera. In this work, we focus on the camera field of view, which is determined by the portion of the subject and of the environment shown in the frame. Automating this task can help freelancers and studios in the visual creative field in their daily activities. In our study, we considered eight classes of film shots: long shot, medium shot, full figure, american shot, half figure, half torso, close up, and extreme close up. Cinematographic shot classification is a complex task, so we combined state-of-the-art techniques to address it. Specifically, we fine-tuned three separate VGG-16 models and combined their predictions through the stacking learning technique to obtain better performance. Experimental results demonstrate the effectiveness of the proposed approach, which achieved 77% accuracy without relying on data augmentation techniques. We also evaluated our approach in terms of F1 score, precision, and recall, and the confusion matrices show that most misclassified samples belong to a neighboring class.
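    The stacking scheme mentioned in the abstract can be sketched as below. This is a toy illustration under stated assumptions, not the paper's system: the three fine-tuned VGG-16 networks are replaced by hypothetical lookup-table "models", the eight shot classes are reduced to three, and a tiny nearest-centroid meta-learner stands in for whatever combiner the authors trained.

    ```python
    # Stacking: each base model emits a class-probability vector; the vectors
    # are concatenated into a meta-feature, and a meta-learner trained on those
    # meta-features issues the final label.
    from collections import defaultdict

    def stack(x, base_models):
        """Meta-feature: concatenated probability vectors of all base models."""
        feats = []
        for m in base_models:
            feats.extend(m(x))
        return feats

    class NearestCentroidMeta:
        """Tiny stand-in meta-learner over stacked probability vectors."""
        def fit(self, X, y):
            groups = defaultdict(list)
            for f, label in zip(X, y):
                groups[label].append(f)
            self.centroids = {l: [sum(c) / len(c) for c in zip(*fs)]
                              for l, fs in groups.items()}
            return self

        def predict(self, f):
            return min(self.centroids,
                       key=lambda l: sum((a - b) ** 2
                                         for a, b in zip(f, self.centroids[l])))

    # Hypothetical per-model probability tables (3 toy classes, 2 toy frames);
    # each tuple holds one probability vector per base model.
    PROBS = {
        "frame_wide":  ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]),
        "frame_close": ([0.1, 0.2, 0.7], [0.2, 0.2, 0.6], [0.1, 0.1, 0.8]),
    }
    base_models = [lambda x, i=i: PROBS[x][i] for i in range(3)]

    train_X = [stack(x, base_models) for x in ("frame_wide", "frame_close")]
    train_y = ["long shot", "close up"]
    meta = NearestCentroidMeta().fit(train_X, train_y)
    ```

    The key point the sketch preserves is the data flow of stacking: the meta-learner sees only the base models' predictions, never the raw frames.
    
    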

    Movie Lens: Discovering and Characterizing Editing Patterns in the Analysis of Short Movie Sequences

    Video is the most widely used media format. Automating the editing process would impact many areas, from the film industry to social media content. The editing process defines the structure of a video. In this paper, we present a new method to analyze and characterize the structure of 30-second videos. Specifically, we study video structure in terms of sequences of shots. We investigate what kind of relation exists between what is shown in a video and the sequence of shots used to represent it, and whether it is possible to define editing classes. Labeled data are needed for this aim, but unfortunately none are available, so new data-driven methodologies must be developed. In this paper we present \XXX, a data-driven approach to discover and characterize editing patterns in short movie sequences. It relies on the Levenshtein distance, the K-Means algorithm, and a Multilayer Perceptron (MLP). Through the Levenshtein distance and the K-Means algorithm, we indirectly label 30-second movie shot sequences. We then train a Multilayer Perceptron to assess the validity of our approach; the MLP also helps domain experts assess the semantic concepts encapsulated by the identified clusters. We extracted 23,887 shot sequences, each 30 seconds long, from 120 different movies in the Cinescale dataset. The accuracy of \XXX varies from 93% to 77% as the number of classes considered grows from 4 to 32. We also present a preliminary characterization of the identified classes and their editing patterns in the 16-class scenario, reaching an overall accuracy of 81%.
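    The distance underlying the labeling step can be sketched as follows. The Levenshtein computation is standard; everything else is an assumption for illustration: shot sequences are encoded as strings over a made-up shot-type alphabet, and because edit distance has no centroid, assignment to fixed representative sequences (medoids) stands in for the K-Means step the paper describes.

    ```python
    # Levenshtein (edit) distance between two shot sequences, via the classic
    # row-by-row dynamic program.
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,          # deletion
                               cur[j - 1] + 1,       # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    # Illustrative shot-type alphabet: L = long, M = medium, C = close up.
    # Two hypothetical cluster representatives standing in for learned centers.
    medoids = ["LLMC", "CCMC"]

    def nearest_cluster(seq):
        """Pseudo-label: index of the representative at minimal edit distance."""
        return min(range(len(medoids)), key=lambda k: levenshtein(seq, medoids[k]))
    ```

    In the approach the abstract describes, pseudo-labels obtained this way then serve as training targets for the MLP.
    
    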